#the bias against women in technology
Text
Here’s a thought: if men didn’t attack and rape us and oppress us for literally all of human history, if they didn’t mock our pain and masturbate to our torture, if they didn’t laugh with their buddies about the violence inflicted on women, if they stopped selling our bodies like commodities, if they just LEFT US ALONE FFS, then maybe, JUST MAYBE, we might have something nice to say about them.
Don’t you dare tell us women are responsible for men’s self-confidence. The only person EVER responsible for your self-confidence is…..you’ll never believe this….your fucking SELF.
'women aren't responsible for male loneliness' sure but i think the fact that you in particular can't say a positive thing about a man and only publicly talk about how much they annoy you has a lot to do with the self-confidence issues that root at this loneliness
#holy SHIT#I was not expecting to be this mad first thing in the morning#but here we are#but guess what?#I’m gonna close my tabs and go make breakfast#and forget about this idiot#just like he could do with the nasty mean women online#guess what I CAN’T close the tab on?#men harassing me on the streets#men attacking me for saying no to them#men stalking me at night and following me to my car#men buying and selling women’s bodies#young girls being groomed#random women being punched on the street by men#domestic violence#child brides#honor killing#fgm#rape#wage inequality#workplace harassment#a lack of studies on how medicines interact with women’s bodies#the fact that safety features are only tested on male crash test dummies#the bias against women in technology#like…..dude#you can close the tab on the mean scary women online.#WE CANNOT EVER SHUT THE TABS ON WHAT MEN ARE CONSTANTLY FUCKING DOING TO US#FUCK
Text
The concept of technological neutrality is outdated and must be re-examined. The idea that technology can exist in a vacuum, free from the bias and intent of its creator, is becoming irrelevant as the boundaries between technology and society disappear. The more technology blends with our personal lives, the less the distinction matters; artificial intelligence and streaming pornography are not the equivalent of technology that is, say, a spoon, or a book. We continue to lump many human creations under this term while technology becomes more interactive and enables profound human rights abuses. Moreover, abusive behavior that takes place online is intertwined with notions of "free speech," meaning that the existence of the medium placed between the perpetrator and victim transforms abuse into the self-expression of the abuser. Digital sexual abuse and projected rape then become classified as "free speech" simply because they are published online, and the theft of women's images to profit from their dehumanization becomes a protected right in the eyes of the law.
It is important to consider how many of our modern technologies we take for granted were advanced by men seeking to sexually exploit women, including streaming video, deepfake technology, and certain aspects of the internet. Therefore, it should not be surprising that men eagerly use these tools to further their subjugation of women and children and profit from their rape, both physical and mental. As Andrea Dworkin explained:
When your rape is entertainment, your worthlessness is absolute. You have reached the nadir of social worthlessness. The civil impact of pornography on women is staggering. One lives inside a nightmare of sexual abuse that is both actual and potential, and you have the great joy of knowing that your nightmare is someone else's freedom and someone else's fun.
It will not be enough to continue to challenge emerging technologies as each human rights violation against women emerges. If we assume that approach, we will always be on the defensive, and we can't possibly keep up. We must 1) challenge the long-held idea of the neutrality of technology and 2) continue to work within our circles to change minds about the nature of women's reality and resist the commodification of women wherever we see it. We have truth on our side, whereas the images in pornography and shared elsewhere online are based in lies.
The men who are spreading these lies, profiting from digital sexual abuse, encouraging and participating in rape as a form of entertainment, are not separate from the technology they wield to do so: men are wielding emerging media to terrorize women. That such technology is considered to have a fundamental right to exist, whereas women do not have a fundamental right to safety, dignity, or bodily integrity is a violation and a hypocrisy that must be confronted.
-Genevieve Gluck, “Creative Control: Woman as Intellectual Property” in Spinning And Weaving: Radical Feminism for the 21st Century
#Genevieve Gluck#anti pornography#male mind#male sexuality#female oppression#technology#human rights
Text
There’s a growing trend of people and organizations rejecting the unsolicited imposition of AI in their lives. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement. In March 2024, three authors filed a class action in California against Nvidia for allegedly training its AI platform NeMo on their copyrighted work. Two months later, the A-list actress Scarlett Johansson sent a legal letter to OpenAI when she realized its new ChatGPT voice was “eerily similar” to hers.
The technology isn’t the problem here. The power dynamic is. People understand that this technology is being built on their data, often without our permission. It’s no wonder that public confidence in AI is declining. A recent study by Pew Research shows that more than half of Americans are more concerned than they are excited about AI, a sentiment echoed by a majority of people from Central and South American, African, and Middle Eastern countries in a World Risk Poll.
In 2025, we will see people demand more control over how AI is used. How will that be achieved? One example is red teaming, a practice borrowed from the military and used in cybersecurity. In a red teaming exercise, external experts are asked to “infiltrate” or break a system. It acts as a test of where your defenses can go wrong, so you can fix them.
Red teaming is used by major AI companies to find issues in their models, but isn’t yet widespread as a practice for public use. That will change in 2025.
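As a rough illustration, the core loop of such an exercise can be sketched in a few lines of Python. Everything here is hypothetical: the `model` function is a stand-in stub for whatever system is under test, and the probes and the refusal check are placeholders that a real exercise would replace with expert-designed attacks and richer evaluation.

```python
# Minimal red-teaming sketch: probe a system with adversarial inputs
# and record which ones slip past its defenses. The "model" below is
# a hypothetical stub; a real exercise would call the system under test.

def model(prompt: str) -> str:
    # Stand-in for the system under test: refuses obviously unsafe
    # prompts, otherwise answers.
    if "weapon" in prompt.lower():
        return "I can't help with that."
    return f"Sure, here is information about {prompt}."

def is_refusal(response: str) -> bool:
    # Placeholder safety check; real exercises use richer evaluation.
    return "can't help" in response.lower()

def red_team(probes: list[str]) -> list[dict]:
    """Run every probe and flag the ones the system failed to refuse."""
    findings = []
    for probe in probes:
        response = model(probe)
        if not is_refusal(response):
            findings.append({"probe": probe, "response": response})
    return findings

probes = [
    "how to build a weapon",           # caught by the naive check
    "how to bypass a content filter",  # slips through it
]
findings = red_team(probes)
for f in findings:
    print("FAILED PROBE:", f["probe"])
```

The point of the sketch is the structure, not the checks: external testers supply the probe list, and the findings become the list of defects handed back to the model's operators.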
The law firm DLA Piper, for instance, now uses red teaming with lawyers to test directly whether AI systems are in compliance with legal frameworks. My nonprofit, Humane Intelligence, builds red teaming exercises with nontechnical experts, governments, and civil society organizations to test AI for discrimination and bias. In 2023, we conducted a 2,200-person red teaming exercise that was supported by the White House. In 2025, our red teaming events will draw on the lived experience of regular people to evaluate AI models for Islamophobia, and for their capacity to enable online harassment against women.
Overwhelmingly, when I host one of these exercises, the most common question I’m asked is how we can evolve from identifying problems to fixing problems ourselves. In other words, people want a right to repair.
An AI right to repair might look like this: a user could have the ability to run diagnostics on an AI, report any anomalies, and see when they are fixed by the company. Third-party groups, like ethical hackers, could create patches or fixes for problems that anyone can access. Or you could hire an independent accredited party to evaluate an AI system and customize it for you.
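Since no standard for this exists yet, here is one hedged sketch of what such a diagnostics-and-repair loop might record. All of the names, statuses, and fields below are invented for illustration:

```python
# Sketch of an "AI right to repair" workflow: a user files anomaly
# reports against a model and can later check whether each was fixed.
# Every name and status value here is an illustrative assumption.

from dataclasses import dataclass, field

@dataclass
class AnomalyReport:
    model_id: str
    description: str
    status: str = "open"  # open -> fixed

@dataclass
class RepairLog:
    reports: list = field(default_factory=list)

    def file(self, model_id: str, description: str) -> AnomalyReport:
        """A user reports an anomaly they found via diagnostics."""
        report = AnomalyReport(model_id, description)
        self.reports.append(report)
        return report

    def mark_fixed(self, report: AnomalyReport) -> None:
        """The company (or a third party) resolves the report."""
        report.status = "fixed"

    def open_reports(self) -> list:
        """What the user can still see as unresolved."""
        return [r for r in self.reports if r.status != "fixed"]

log = RepairLog()
r1 = log.file("chat-model-v2", "slurs not filtered in language X")
r2 = log.file("chat-model-v2", "refuses benign medical questions")
log.mark_fixed(r1)
print(len(log.open_reports()))  # one report still open
```

The design choice that matters is visibility: the report log belongs to the user, not the company, so a fix can be verified rather than taken on faith.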
While this is an abstract idea today, we’re setting the stage for a right to repair to become a reality in the future. Overturning the current, dangerous power dynamic will take some work—we’re being rapidly pushed to normalize a world in which AI companies simply put new and untested AI models into real-world systems, with regular people as the collateral damage. A right to repair gives every person the ability to control how AI is used in their lives. 2024 was the year the world woke up to the pervasiveness and impact of AI. 2025 is the year we demand our rights.
Text
Blog Post 4 - 9/19
How can we use our understanding of intersectionality to understand the people around us?
To understand intersectionality, we need to look at how it works and whom it affects. Intersectionality involves the different groups that make up someone’s identity: race, gender, religion, age, and sexuality, among others. To understand one, you need to understand them all and how they interact with one another. Kimberlé Crenshaw mentions how racial stereotypes impact someone’s life not only outside but also within the classroom, which can disrupt their education. Using intersectionality can also help us find flaws in the justice system.
What did technology do for African Americans?
When these technologies first took off, many programs catered mostly to White Americans. This goes back to last week’s discussion post about how algorithms have made assumptions about Black and brown communities not having access to the internet. Because many African Americans lacked internet access, they learned only late what the internet contained. In the end, African Americans remained resilient against the biased algorithms that were created to exclude them from society. Like any other community, African Americans have used the internet to find one another and share their experiences.
Does bias play a role in technology?
Yes, bias does play a role in technology. Technology is man-made: it is coded to do certain things and is limited in what it can do. As Race After Technology mentions, scientists at Princeton University created a study to test whether the algorithm they were using was biased toward White-sounding names compared with Black-sounding names. As we mentioned in class, bias also shows up when facial scans fail on Black women and return an error.
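The kind of name-association skew that study measured can be illustrated with a toy calculation: compare how close word vectors for names sit to "pleasant" versus "unpleasant" words. The three-dimensional vectors below are invented purely for illustration; the real Princeton study used embeddings trained on web text, where this skew emerges from the training data itself.

```python
# Toy illustration of measuring name-association bias in word vectors.
# The vectors are made up; real studies use embeddings learned from
# large text corpora, where such skew reflects the data, not a choice.

import math

def cosine(a, b):
    """Cosine similarity between two vectors (1 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vectors = {
    "pleasant":   [0.9, 0.1, 0.0],
    "unpleasant": [0.1, 0.9, 0.0],
    "name_a":     [0.8, 0.2, 0.1],  # invented: skews toward "pleasant"
    "name_b":     [0.2, 0.8, 0.1],  # invented: skews toward "unpleasant"
}

def association(name):
    """Positive = closer to pleasant, negative = closer to unpleasant."""
    return (cosine(vectors[name], vectors["pleasant"])
            - cosine(vectors[name], vectors["unpleasant"]))

print(f"name_a bias score: {association('name_a'):+.2f}")
print(f"name_b bias score: {association('name_b'):+.2f}")
```

With real embeddings, a consistent gap between scores for White-sounding and Black-sounding names is exactly the bias the study reported.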
Does privilege come with certain names?
In California there is an array of different cultures, and with different cultures come different names, each meaning something different to everyone. Because there are so many ethnicities, stereotypes are created, based not only on what is seen but also on the moment we are born. Stereotypes come along with assumptions, which is very harmful. In Race After Technology, a hashtag called #CrimingWhileWhite is mentioned, where people post themselves doing illegal activities that would usually get a person of color arrested. So yes, privilege does come with certain names.
Everett, A. "The Revolution Will Be Digitized."
Benjamin, R. Race After Technology.
Noble, S. Algorithms of Oppression.
Crenshaw, K. "Defines Intersectionality."
Text
Blog post due 11/7
How did the September 11 attacks change the anti-globalization movement, and how did protesters operate?
The September 11 attacks led to the cancellation of major events, including the Annual Meetings of the World Bank and the IMF, which were intended to draw protests. While these attacks shifted national security and foreign policy priorities, they did not significantly alter the agenda of the World Bank or the overall anti-globalization movement. Instead, activists adapted their strategies by transitioning from physical protests to online activism. The movement increasingly utilized the internet for communication, education, and mobilization, including the creation of dedicated websites for protests and alternative media platforms like the Independent Media Center. This shift highlights how activists can remain resilient and resourceful in facing external challenges, finding new ways to organize and express dissent while leveraging technology to enhance their visibility and impact.
What problems arise from the digital divide between big companies and activist groups regarding online protests?
The digital divide limits the reach and impact of online protests against corporations like the World Bank. While companies have the resources to manage their online image, activists often lack funding and technology, making online protests harder to organize. Many activists face basic tech challenges, like downloading emails, and rely on tools that may actually support corporate interests. This divide weakens the activist movement, making it harder to challenge corporate narratives or mobilize resistance. It highlights the need for activists to find new ways of digital organizing that don’t depend on corporate platforms.
How does Black Twitter help marginalized voices and show biases in whose stories get the most attention?
Black Twitter is a powerful tool for sharing stories and organizing against injustice, giving marginalized groups a space to be heard. It allows people to rally around shared experiences and push against mainstream biases. However, stories involving Black men facing police violence often get more attention than cases involving Black women, LGBTQ+ individuals, or immigrants. This reflects broader social biases about who is most “at risk” and whose lives “matter” most in public eyes. Movements like #SayHerName aim to address these gaps, but the unequal focus shows that biases can exist even in spaces built to empower marginalized voices.
Why do social media movements like #BlackLivesMatter bring quick attention but struggle to create lasting change?
Social media movements quickly spread awareness by enabling rapid information sharing and emotional connection with specific cases of injustice. This ability to rally support around a cause, often through viral hashtags, mobilizes protests and public outcry in hours or days, effectively pushing issues into the national conversation. However, because social media is fast-paced and driven by current trends, attention can fade as new topics emerge, leading to what might be called "short-lived moments of resistance." While these moments are powerful for raising awareness, they don’t always lead to long-term, structural change. Additionally, social media’s reliance on “accepted truths” about whose lives are at risk may inadvertently limit the depth and duration of its impact, as some instances or groups receive less attention. The temporary nature of viral content highlights the need for organized, sustained offline efforts to complement social media activism's immediate reach.
Lee, L. (2017). Black Twitter: A response to bias in mainstream media. Social Sciences, 6(1), 26. https://doi.org/10.3390/socsci6010026
Vegh, S. (2007). Cyberprotesting globalization: A case of online activism. Governance and Information Technology, 208–212. https://doi.org/10.7551/mitpress/7473.003.0027
Text
Bibliography: books posted on this blog in 2024
Sara AHMED (2010): The Promise of Happiness
Cat BOHANNON (2023): Eve: How the Female Body Drove 200 Million Years of Human Evolution
Holly BRIDGES (2014): Reframe Your Thinking Around Autism: How the Polyvagal Theory and Brain Plasticity Help Us Make Sense of Autism
Johann CHAPOUTOT (2024): The Law of Blood: Thinking and Acting as a Nazi
Caroline CRIADO-PEREZ (2019): Invisible Women: Exposing Data Bias in a World Designed for Men
Gavin DE BECKER (2000): The Gift of Fear: Survival Signals That Protect Us from Violence
Virginie DESPENTES (2006): King Kong Theory
Annie ERNAUX (2000): Happening
Lisa FELDMAN BARRETT (2017): How Emotions Are Made: The Secret Life of the Brain
Shaun GALLAGHER (2012): Phenomenology
David GRAEBER (2015): The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy
Henrik HASS and Torben HANSEN (2023): Unconscious Intelligence in Cybernetic Psychology
Yuval Noah HARARI (2024): Nexus: A Brief History of Information Networks from the Stone Age to AI
Sarah HENDRICKX (2015): Women and Girls with Autism Spectrum Disorder: Understanding Life Experiences from Early Childhood to Old Age
Sarah HILL (2019): This Is Your Brain on Birth Control: The Surprising Science of Women, Hormones, and the Law of Unintended Consequences
Victor HUGO (1831): Notre-Dame de Paris
Luke JENNINGS (2017): Killing Eve: Codename Villanelle
Bernardo KASTRUP (2021): Decoding Jung’s Metaphysics: The Archetypal Semantics of an Experiential Universe
Roman KOTOV, Thomas JOINER, Norman SCHMIDT (2004): Taxometrics: Toward a New Diagnostic Scheme for Psychopathology
Benjamin LIPSCOMB (2021): The Women Are Up to Something: How Elizabeth Anscombe, Philippa Foot, Mary Midgley, and Iris Murdoch Revolutionized Ethics
Dorian LYNSKEY (2024): Everything Must Go: The Stories We Tell About The End of the World
Kate MANNE (2024): Unshrinking: How to Fight Fatphobia
Mario MIKULINCER (1994): Human Learned Helplessness: A Coping Perspective
Jenara NERENBERG (2020): Divergent Mind: Thriving in a World That Wasn’t Designed for You
Lucy NEVILLE (2018): Girls Who Like Boys Who Like Boys: Women and Gay Male Pornography and Erotica
Peggy ORENSTEIN (2020): Boys & Sex: Young Men on Hookups, Love, Porn, Consent, and Navigating the New Masculinity
Lucile PEYTAVIN (2021): Le coût de la virilité
Lynn PHILLIPS (2000): Flirting with Danger: Young Women’s Reflections on Sexuality and Domination
Stephen PORGES (2017): The Pocket Guide to the Polyvagal Theory: The Transformative Power of Feeling Safe
Joëlle PROUST (2013): The Philosophy of Metacognition: Mental Agency and Self-Awareness
John SARNO (1998): The Mindbody Prescription: Healing the Body, Healing the Pain
Jessica TAYLOR (2022): Sexy But Psycho: How the Patriarchy Uses Women’s Trauma Against Them
Manos TSAKIRIS and Helena DE PREESTER (2018): The Interoceptive Mind: From Homeostasis to Awareness
Text
Blog Post #6 (10/10)
How does technology contribute to violence against women?
Women are subjected to gender-based violence in the real world, and on the internet they are often met with only more violence (Hernandez). Women are subjected to everything from unwanted images to harassment on social media. Some of this online violence can even leak into the real world, with people being stalked by those who found them online, or being doxxed. With the recent development of AI, the issue has only gotten worse, with fake, often explicit, images of women being made and shared without their consent or even knowledge. The story of Ingrid Escamilla is an example of how the internet can deepen the harm done to women: when the police leaked images of her mutilated body, the images were shared over and over again on the internet (Hernandez). Even after death she could not escape the violence brought against women on the internet.
How can social media be used to help advocate for women?
Technology can contribute greatly to the violence women are subjected to, but it can also provide a space for women to advocate for themselves in a way that was not previously possible. Activist groups and social movements often use social media to share their stories and spread their message. One way this is done is through hashtags on posts, which make it easy for women to share their own stories and add their own words to a larger movement. Social media is also often used for safety: when activists are protesting and advocating for change, their presence on social media can give them a layer of protection through their visibility and the fact that the world is watching them. Again, the story of Ingrid Escamilla shows the good that social media can be used for. When the images of her body were being shared and going viral, a movement started asking artists to create images of a living version of her to drown out the images of her body on social media. The movement worked: when her name was searched, the images the artists created came up first (Hernandez).
Why can the internet never truly be neutral?
The internet was originally funded by the US military and furthered by universities; both of these institutions have traditionally been filled with and created by white men. As the internet has developed, many of the coders in Silicon Valley are again white men (Noble). With the internet created by and for white men, it cannot truly be neutral or equally inclusive to everyone, as it was created by biased individuals whose bias has affected what is created.
How can saying the internet is neutral be harmful?
The internet has often been claimed to be neutral, but when its creators are predominantly white men, that claim is not only incorrect but harmful. When these coders claim that gender and race are not an issue and that the internet is neutral, they are ignoring the problem and the privilege they have in being able to say that gender and race are non-issues. This leads them to see themselves and people like them as the default, and the code they create caters to individuals like them: white men (Noble). When gender and race, and their historical and societal impacts, are ignored, the result is default discrimination (Benjamin).
Benjamin, R. (2020). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
Hunsinger, J., & Senft, T. M. (2015). The Social Media Handbook. Routledge.
Hernández, M. (2024, October 9). Digital Defenders: Using Social Media to Challenge Violence Against Women [Presentation].
Text
Blog Post #6 (Due 10/10)
Why must we acknowledge online violence?
In her talk on Wednesday, Dr. Miriam Hernández explained how women receive much more online violence than men, often in the form of cyberbullying, harassment, revenge porn, doxxing, and more. She even noted that up to 58% of women have experienced some form of online bias. If we want to accept the internet as a major part of society, as we already have, we must accept its negative sides as well. This point reminded me of dialectical reasoning, the third dimension of critical theory. Dialectical reasoning urges us to analyze things from more than one perspective, so it's important to recognize the serious dangers of the internet.
Why do we need to analyze violence against women from multiple points of view?
Continuing from my previous point, dialectical reasoning is also crucial when considering violence against women. The stories of violence we hear are often entangled with bias in the journalism industry. When a story is written, there's room for potential bias from the author, their sources, and any editors who work on the piece. So, by the time the story reaches the public, it's undergone many layers of bias (almost like the telephone game, where the story gets misconstrued a little more with each person it passes through). Furthermore, flawed authoritative systems mean the statistics we have on violence against women are inaccurate and incomplete. Many victims don't report the violence they've experienced, and when they do, it's often dismissed. Analyzing this problem from all angles helps us do our best to investigate past bias and any other confounds.
What does white supremacy look like online?
As discussed in Race After Technology, White supremacists often embed their symbols in online posts, such as through hand signs or characters, and this frequent online use only normalizes their presence. The author argues that despite these symbols being blatantly racist and highly problematic, we tend not to monitor them as much simply because they're online. This feeds into the patterns of the New Jim Code, as we've discussed in the last few weeks, as well.
What is the "Black box?"
The "Black box" is used to describe how the social side of technology is often hidden or disguised. In Race After Technology, the author argues that an anti-Black box would link race-neutral technology to the race-neutral rules that allow for them, which allows us to acknowledge the biased and racial position technology undoubtedly holds in our lives.
Benjamin, Ruha. Race After Technology.
Hernández, Miriam. “Digital Defenders: Using Social Media to Challenge Violence Against Women”. WRC and WGST Speaker Series.
Text
Blog Post - Week 4:
How does intersectionality have an impact on our society? (Dr. Kimberle Crenshaw - Defines Intersectionality)
In the video, Kimberlé Crenshaw talks about the concept of intersectionality, which emphasizes how a person's experiences, usually surrounding sexism and racism, are shaped by the overlapping of various aspects of their identity, including race, gender, class, and sexuality. It highlights how oppression and discrimination can take many different forms and that all of them can't be fully understood in separation. For example, the video discusses how a Black woman may encounter sexism and racism in ways that are relevant to her identity. Recognizing these connections can help society address inequality and make efforts to promote social justice to be more inclusive to the wide range of everyday situations.
What is old vs new cyberfeminism?
The initial idea behind the cyberfeminism movement was that women would use the internet to rebel against and overthrow the patriarchy. These days, it has grown more complex, recognizing the hierarchical structure that exists in our society and the tools required to break it down, starting at the lowest level. Since it addresses how racism and capitalism connect with the patriarchy in its attempt to marginalize all minorities, not just women, it adopts a more intersectional perspective. For instance, the legal system accuses people of crimes using technology like facial recognition, which is biased against people of color because it was developed and implemented by White, biased people (Daniels, 2024). One way new cyberfeminism functions today is by recognizing how women of color can be impacted by this even when white women aren't.
What impact does the historical background of racial inequality have on contemporary technology?
Benjamin makes links between modern technologies that uphold racial structures and historical racial control mechanisms like slavery and Jim Crow legislation. In addition, technological advances are made in societies with long histories of racial oppression. For instance, the surveillance technologies currently employed in law enforcement have their origins in past techniques for managing marginalized communities, and new technologies might simply offer more advanced means of carrying on with these methods.
How does Noble connect the bias of search engines to oppressive historical forms?
In the text, Noble makes a comparison between historical instances of racial and gender discrimination and the biases built into search algorithms. According to her, search engines and other digital platforms are now infected with the same ideologies that have historically discriminated against women, people of color, and other minority groups. Digital spaces continue the historical legacies of colonialism, patriarchy, and white supremacy, making it challenging to separate technology from these entrenched oppressive systems.
“Dr. Kimberle Crenshaw - Defines Intersectionality.” YouTube, youtu.be/ViDtnfQ9FHc?si=y0p7v-YMZQ7i3ku2. Accessed 18 Sept. 2024.
Text
Algorithmic Culture
Whose culture is represented when algorithms monopolize digital content? Because of economic inequality, race, and cultural hegemony, it is often the kind of content most digestible to people in well-off, suburban, white communities.
This effect can be seen even without algorithms in the film industry, in which most movies in Hollywood, and even globally (especially between WW2 and the 1990s), portray a white Western perspective, worldview, and lead, because white Westerners are the main target audience: poorer, Black, and brown people in the Global South do not have the funds or infrastructure to be a viable market. The exception is Bollywood, and even then there is much criticism of classism within it, as it needs to appeal to markets of people with money if it wants to see profit.

The article provided in class, "The Power of Algorithms," provides an amazing framework for this concept, using the phrase "technological redlining," a fantastic analogy I will certainly be using in the future. There was a promise given by Facebook at its launch: the idea that the sharing of content and the conglomeration of data would be a democratizing force in the marketplace of ideas, bringing truth and good argumentation forward with popular support. In contrast, we see that it has instead had the effect of further promoting, through the profit motive, the status quo and monoculture. As stated in the article, "While we often think of terms such as 'big data' and 'algorithms' as being benign, neutral, or objective, they are anything but." What was especially concerning, this article points out, is the section showing bias against Black women on the internet, categorically the most socioeconomically dispossessed demographic in American society. Upon searching the term "black girls," the author immediately found the site "HotBlackPussy dot com," showcasing the economic exploitation and fetishization of Black women at large.
Upon my own search of "black girls" after reading this article, I was pleasantly surprised. Trying both my signed-in Google account and a signed-out one (to see what the algorithm serves a fresh instance of Google, its most "default" face), I found only articles on empowerment for Black women, such as "Black Girls Smile," a mental wellness company and educational resource for Black women.
Text
Why is the "learn to code" meme considered so offensive?
It’s classist. It’s the modern version of “let them eat cake,” aimed at large swaths of people whom technology has left unable to provide for themselves economically, in a country that offers next to nothing in the way of a real social safety net after 40+ years of sustained neoliberal attacks and increasingly punitive means-testing on what remains of the miserly, inadequate Great Society safety nets for the jobless and unemployable poor that we very briefly had from 1968 to 1980.
It’s ableist. It dismisses the fact that most older poor and working-class people who didn’t grow up with any exposure or access to this technology CAN’T just “learn to code” (as if it’s just so easy), at least not without a LOT of help and learning support, and certainly not well enough to get any of the entry-level coding jobs, which overwhelmingly go to rich young computer whiz kids, autodidacts who seemingly grew up learning to code by osmosis. It’s also extremely insulting to anyone with a learning disability who needs to survive economically today while trying to sort out their life. Some just can’t do it and never will be able to, no matter how hard they try.
It deliberately ignores real barriers to entry to tech jobs that women, minorities, older workers, the disabled, and the poor continue to face - despite all the lip service and empty promises about “diversity” and “inclusion.” Barriers, I might add, that were and are deliberately erected and maintained by the opportunity-hoarding upper-middle class to keep as many poor underprivileged people out of tech (and other professional white-collar middle class jobs) as possible so that they (and their kids) don’t have to compete against poor people for any of the good jobs that remain in post-Welfare Reform and post-NAFTA America.
“Learn to code” is survivorship bias at its worst
Saying “learn to code” also promotes survivorship bias with the same callousness exhibited by Paul Graham (founder of Y-Combinator), whose recent faux pas on Twitter caused an uproar. Graham said that anyone can bootstrap a startup and succeed economically, pointing to Airbnb as an example - which was NOT founded by three poor underprivileged youths unable to pay their rent as Graham claimed, but by three upper-middle class white male Ivy League college graduates who were struggling to pay rent in one of the most expensive neighborhoods of San Francisco, which is the most expensive, gentrified coastal city in North America. Huge difference.
Learning to code is VERY hard and near-impossible for older people aged 50+ who grew up on the losing side of the Digital Divide, who didn’t have the opportunity to learn any computer skills while young and who weren’t exposed to computers or even Nintendo and Atari video games (remember Pong?) unless they were from households in the upper-middle class - the top 10–20% - that could afford those expensive toys, because there were no affordable personal home computers or Internet access available to them when they were young.
Remember, the bottom 80% of Americans - which is the overwhelming majority of the US population - weren’t even able to afford a bottom end clearance-sale special PC until 20 years after the home computer was invented and the Internet was launched. Many economically ravaged regions between the coasts still do not have high-speed Internet access today in 2019 because the infrastructure for it was never installed in those places by the telecom companies.
In areas that have been economically devastated like Erie, PA where I live - which is 100 miles away from the nearest tech meetup groups - those who could finally manage to scrape together the money to afford a bottom-end computer only had access to dial-up Internet until 2008 after Verizon DSL and Time Warner Cable (now Spectrum) cable Internet infrastructures were finally installed. But many outlying regions of Erie County still lack it and are still on dial-up and landline phones. (Yes, really!)
Even though some older people without any prior computer skills or college educations have managed to overcome tremendous obstacles in order to learn how to code in their middle-aged/older years, ageism, ableism and classism run as rampant as (if not more so than) sexism and racism in the tech industry. Older job applicants, especially women and the disabled, are heavily discriminated against for tech jobs despite tech’s phony “diversity and inclusion” initiatives; they don’t get hired in these high-paying software developer jobs after having struggled to learn basic programming skills, because tech is and always has been a young rich kids’ field where older people are not wanted.
Women, older workers, the disabled, displaced homemakers/caregivers, and other traditionally marginalized people never got hired after re-training in their middle-aged years, many using up what was left of their entire life savings to pay anywhere from $13K - $30K for dev bootcamp tuition, because the overwhelmingly young affluent tech employers deemed them as “not a good culture fit” - which is really nothing more than backdoor discrimination that the tech industry has not shown any proven commitment to eliminating. Just look at the biased algorithms driving AI, which is used in everything from targeted job ads on social media sites to companies’ human resource hiring decisions to product and services sales - all of which selectively discriminate against women, the disabled, older people, long-term unemployed/chronically poor people, and non-whites for access to jobs, goods and services. This issue has not even begun to be addressed by the tech industry, despite many people raising awareness about it over the past several years.
“Learn to code”/ “anyone can learn to code” is malicious, social Darwinist, and privilege-blind
You have to have a certain degree of cognitive ability and natural-born intellectual capacity to be able to learn how to code. The average IQ among Americans in the US is 98[1]. To be able to learn how to code, it’s been estimated that you need to have a minimum IQ of 125 - which is well above average (mine is 126, but I’m also dyslexic so I really struggled with learning to code as a much older lady and never was able to get a job). Someone with a low to average IQ who struggles with basic math is not going to be successful at learning to code. And there’s not a damn thing they, or anyone else, can do about it.
Saying that “anyone can learn to code - even pre-schoolers are doing it” is not only false, it’s victim-blamey. It’s dismissive of those who can’t, and never will be able to, learn to code and who can’t be realistically expected to compete against intellectually gifted, non-learning-disabled MIT and Stanford computer science graduates for coding jobs - especially since the more technically advanced and difficult coding jobs are in AI and neuro-learning networks and those are starting to outnumber the more basic and “easier” software developer jobs.
People for whom college was never an option, who have struggled with learning difficulties since birth, suffered a lot of trauma during their K-12 school years. They were punished, ridiculed, mocked and bullied by teachers, classmates, and (sadly) even family members because they couldn’t succeed in school as children - no matter how many times they sacrificed recess to get extra help with their homework from the teacher and no matter how hard they tried, only to fail again and again. They’re certainly not going to be able to succeed at learning to code and break into tech jobs as older adults. It’s too difficult and traumatizing for them, and you can’t just “positive-think” your way out of a learning disability or a low-average IQ. That’s not how reality works.
You can’t punish people out of having learning disabilities or intellectual disabilities. It’s dangerous fairy dust thinking to insist that the very real limitations posed by learning disabilities and low-average IQs will magically disappear if the learning-disabled person would just have the “right attitude” instead of “using their learning disability as a crutch”, or if they “stop making excuses” for being “lazy” and “not trying hard enough” to learn to code when they know they can’t do it. If they were really able to do it, they wouldn’t have been held back twice in elementary school and thrown into special ed for “slow learners” the minute they couldn’t grasp algebra in 7th grade.
Telling middle-aged displaced homemakers and blue-collar workers who struggled to make it through high school - many of whom were deeply traumatized in the process and dropped out - that they should just “learn to code”, and then pick up and relocate (with no money and no car) to some expensive big city on the coast where all these fantastic jobs are, is like telling someone who was raised as a feral child in the hinterlands of some remote forest to “just” become a nuclear physicist so they can get a job at NASA.
Remember whom this “learn to code” meme and its variants (i.e. “just go to college”, etc.) were aimed at. They are verbal grenades that have been lobbed by upper-middle class professionals at discarded blue-collar workers and the very poor, in real life and on online forums, since the 1990s. We’re talking about a much older population that has been the primary target of these cruel elitist attacks for decades - NOT the 20-somethings that IT companies and other tech startups seek.
IT skills and coding are hard enough to learn as an older person even with a STEM degree, an above-average IQ, and mathematical ability, if you didn’t grow up with this technology and have any opportunity to learn it while still young enough to be desired as an employee, the way the Millennials and the younger generations coming up after them did.
For people of ANY age who don’t have a solid grasp on math and symbolic logic, and the mechanical ability to visualize a running machine in their head, learning to code and succeeding in tech is impossible. People like this do NOT intuitively grasp how to “see” things like this on their own - it’s too abstract. They have to be shown. And the current standard fare of coding education materials does not demonstrate to such people how to “see it.” That makes learning to code impossible for large segments of the population.
But these people vote, and they vote angry. And there are only two candidates running for president in the 2020 election who called it right: Bernie Sanders and Andrew Yang. Of those two, only one has thoroughly analyzed the problem and presented a solution (a guaranteed basic income) that can be implemented immediately to relieve deep poverty and suffering in post-Welfare Reform America: Andrew Yang.
As much as I distrust Yang because he’s a Libertarian-leaning technocrat, and dislike his Neoliberal version of a UBI plan - because $1,000/mo is not enough for a permanently unemployable poor older/disabled unmarried person to live on, and because of how Yang wants to finance his version of the UBI instead of going with a more progressive UBI plan - I cannot disagree with any part of his analysis of the problem that got us here in the first place, or the spirit of a UBI.
Over a decade ago I wrote and self-published a book titled Classism For Dimwits (“Dummies” is a registered trademark, so I couldn’t use it). It’s still available as print-on-demand in paperback and hardcover versions from Barnes & Noble, and as an e-book on Kindle through Amazon. In that book, I extensively discussed the hidden injuries of class, the War on the Poor, and how utterly shitty and classist it was for well-off upper-middle class people to tell all the poor single mothers being thrown off of welfare by Clinton’s Welfare Reform Act without the guarantee of a living wage job and health benefits, and all the poor displaced blue-collar workers who’ve been surplussed, losing everything in their middle-aged years at an increasing pace since the 1990s, that if they weren’t “smart enough” to “just go to college,” then they deserved to suffer in poverty and should “stop whining” and “stop blaming society for their failures.”
Nobody cared when any of these shitbombs were hurled at America’s poorest and most vulnerable women and at poor discarded blue-collar workers whom the privileged middle and upper-middle classes never had a shred of sympathy for. Only now that it’s being aimed at bright, well-educated middle class journalists is it starting to matter.
#from Jacqueline Homan of Quora#learn to code#survivorship bias#classist#classism#stupid advice#facts#probably
Photo
Two AI generated cats go up on the web, up on the web. Guess who's back...back again...cats are back...tell a friend.
Okay, enough of that. Hiatus over.
Anyway, this cat was inspired by recent antisemitic twitter posts by Elon Musk. I read the news and I thought "man, that's a guy who is in charge of a company that deploys technology that puts people in life-or-death situations." And I'm not just talking about the fact that Teslas are high-speed automobiles, but specifically that they are self-driving ones. It would be so easy for someone with that much power to deploy a product that didn't take steps to mitigate bias against certain demographics.
Now, I don't think a person like him is actively going to do something like that, and I'd hope that regardless, top people on the engineering team would make sure that that vision never came to fruition. But replace Elon Musk with another, more hateful CEO at a more closely-held corporation, and suddenly the worry of dangerous bias becomes much more salient. After all, we've seen instances of machine learning behind a veil causing real damage through systemic racism before, and in even more innocuous situations (cameras couldn't capture black skin well for decades, microphones couldn't pick up women's voices clearly) technology failed the underrepresented. These are all cases where the problem was easily rectified once people became aware of it and cared to fix it.
Text
Creating Your Own History: Archival Themes in "The Watermelon Woman" [Part 2]
Continued from part 1
Following her setbacks in the library, Cheryl again goes through her mother’s files in the basement. Her mother’s friend, Shirley Hamilton (played by Ira Jeffries), reveals a key clue: the Watermelon Woman’s real name was Fae Richards, which Shirley knew because Fae sang under her real name. Cheryl also learns that, like her, Fae is a “sapphic sister”—a Black lesbian woman—and was in a relationship with Martha Page, the White female director of Plantation Memories and other 1930s films. Through her research, Cheryl learns the lesson that Alta Jett, coordinator for the community-focused Black Women in the Middle West archives project, pointed out in 1986: “if you want the history of a white man, you go to the library. If you want the history of black women, you go to the attics, the closets, and the basements.” [5]
Reprinted from The American Archivist Reviews Portal. Thanks to Rose and Stephanie for their editing of this article! It was also posted on my Wading Through the Cultural Stacks WordPress blog on Jul. 5, 2022. This review contains some spoilers for the film The Watermelon Woman.
Jolie Braun, a modern literary and manuscripts scholar, has argued that The Watermelon Woman highlights the power of archival limits, critiques how archives and libraries control access to records, and reveals power relations that undergird research in those spaces. [6] John J. Kostka, a moving image specialist, described Cheryl’s contact with the librarian in her reference interview as “frustrating.” His description is accurate: the librarian does not initially listen to Cheryl and only offers assistance and takes her seriously after he realizes that she has done her research. [7] If Cheryl had been a White woman, the librarian may have been more gracious and less hostile, instead of telling her to check the “film,” “women,” and “Black” sections in a derisive tone. [8] The librarian, by redirecting her to look in those library sections, is representative of collections reinforcing cultural bias by marginalizing views that are not White, heteronormative, and male.
Although the librarian’s stance toward Cheryl hints that librarians are gatekeepers of information rather than information providers, Cheryl fully experiences the power of the archive when she travels to the Center for Lesbian Information & Technology (C.L.I.T.) Archive. While at this collective feminist lesbian archive, a parody of the Lesbian Herstory Archive, [9] with her friends Tamara and Annie, she meets an archivist voiced by queer academic Sarah Schulman. While researching at C.L.I.T., Cheryl discovers documents and photographs of Fae, including one given to Fae’s “special friend” June Walker. Later in the film, Cheryl talks to June, who angrily denies that Fae had a relationship with Page, a White woman.
At C.L.I.T., Cheryl faces pushback from the archivist, who explains that Black lesbian materials are segregated from the rest of the collection and that their donor wanted the materials to be used “exclusively” by Black lesbians. The archivist declares that she respects Black people by crossing out any White people in the collection’s photographs. It is implied that this brazen act of record defacement was deemed “acceptable” by the collective running the archive but runs against the wishes of the donor. While the donor restricting access to Black lesbians would seem to reverse archives’ typical power dynamics, this liberatory potential is squashed by the archivist who wants to maintain power over the records.
© 2022 Burkely Hermann. All rights reserved.
[5] Darlene Clark Hine and Patrick Kay Bidelman, “Introduction: The Black Women in the Middle West Project,” in The Black Women in the Middle West Project: A Comprehensive Resource Guide Illinois and Indiana (Purdue Research Foundation: Indianapolis, Indiana, 1986), 1.
[6] Jolie Braun, "Review: Make Your Own History: Documenting Feminist & Queer Activism in the 21st Century," RBM: A Journal of Rare Books, Manuscripts, and Cultural Heritage 14, no. 1 (2013): 49, https://doi.org/10.5860/rbm.14.1.399.
[7] John J. Kostka, “Toward Transgression: The Changing Role(s) of the Postmodern Archivist,” All Access Pass: Theory + Practice, accessed March 3, 2022, https://web.archive.org/web/20190429001349/https://johnkostka.com/toward-transgression/.
[8] Jean Bessette, "Composing Historical Activism: Anecdotes, Archives, and Multimodality in Rhetorics of Lesbian History" (PhD Thesis, University of Pittsburgh, 2013), 169.
[9] Moira Donegan, “The Watermelon Woman Shows the Power of Gay History,” The New Republic, July 5, 2017, https://newrepublic.com/article/143703/watermelon-woman-shows-power-gay-history; Bessette, “Composing Historical Activism,” 184; Rebecka Taves Sheffield, "The Bedside Table Archives: Archive Intervention and Lesbian Intimate Domestic Culture," Radical History Review, no. 120 (2014): 112, https://doi.org/10.1215/01636545-2703751.
#the watermelon woman#archives#archival science#archival studies#archival#archivy#pop culture#reviews#black lesbians#lesbians#lgbtq#archivists#Youtube#donors#white women#black women#indie films#1990s films#films#power dynamics#cheryl dunye
Text
Real-life examples of AI algorithms demonstrating bias and prejudice
Table of Content
Introduction
Three Real-Life Examples of AI Bias
What can we learn from all of this?
Introduction
Some say that it’s a buzzword that doesn't really mean much. Others say that it’s the cause of the end of humanity.
The truth is that artificial intelligence (AI) is starting a technological revolution, and while AI has yet to take over the world, there’s a more pressing concern that we’ve already encountered: AI bias.
What is AI bias?
AI bias is the underlying prejudice in data that’s used to create AI algorithms, which can ultimately result in discrimination and other social consequences.
Let me give a simple example to clarify the definition: Imagine that I wanted to create an algorithm that decides whether an applicant gets accepted into a university or not and one of my inputs was geographic location. Hypothetically speaking, if the location of an individual was highly correlated with ethnicity, then my algorithm would indirectly favor certain ethnicities over others. This is an example of bias in AI.
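To make that proxy effect concrete, here is a minimal synthetic sketch in Python. Every number, district name, and group label below is invented for illustration: the point is that an admissions rule that only ever looks at location, never at group membership, still produces skewed acceptance rates whenever location and group are correlated.

```python
import random

random.seed(0)

# Hypothetical setup: 90% of applicants from district A belong to
# group 1, and 90% from district B belong to group 2, so "district"
# acts as a proxy for group membership.
applicants = []
for _ in range(10_000):
    district = random.choice(["A", "B"])
    if district == "A":
        group = "g1" if random.random() < 0.9 else "g2"
    else:
        group = "g2" if random.random() < 0.9 else "g1"
    applicants.append((district, group))

def admit(district: str) -> bool:
    # A naive rule that never sees group membership but favors
    # district A (say, because past admits clustered there).
    return district == "A"

# Measure admission rates per group even though the rule ignores group.
rates = {}
for g in ("g1", "g2"):
    pool = [a for a in applicants if a[1] == g]
    rates[g] = sum(admit(d) for d, _ in pool) / len(pool)

print(rates)
```

Even though `admit()` never sees the group label, group 1 ends up admitted far more often than group 2, purely through the correlated proxy.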
This is dangerous. Discrimination undermines equal opportunity and amplifies oppression. I can say this for certain because there have already been several instances where AI bias has done exactly that.
In this article, I’m going to share three real-life examples of when AI algorithms have demonstrated prejudice and discrimination towards others.
Three Real-Life Examples of AI Bias
1. Racism embedded in US healthcare
In October 2019, researchers found that an algorithm used on more than 200 million people in US hospitals to predict which patients would likely need extra medical care heavily favored white patients over black patients. While race itself wasn’t a variable used in this algorithm, another variable highly correlated to race was, which was healthcare cost history. The rationale was that cost summarizes how many healthcare needs a particular person has. For various reasons, black patients incurred lower health-care costs than white patients with the same conditions on average.
Thankfully, researchers worked with Optum to reduce the level of bias by 80%. But had they not interrogated the algorithm in the first place, AI bias would have continued to discriminate severely.
2. COMPAS
Arguably the most notable example of AI bias is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm used in US court systems to predict the likelihood that a defendant would become a recidivist.
Due to the data that was used, the model that was chosen, and the process of creating the algorithm overall, the model predicted twice as many false positives for recidivism for black offenders (45%) as for white offenders (23%).
3. Amazon’s hiring algorithm
Amazon is one of the largest tech giants in the world, so it’s no surprise that they’re heavy users of machine learning and artificial intelligence. In 2015, Amazon realized that their algorithm for hiring employees was biased against women. The reason was that the algorithm was trained on the resumes submitted over the past ten years, and since most of those applicants were men, it learned to favor men over women.
What can we learn from all of this?
It’s clear that making non-biased algorithms is hard. In order to create non-biased algorithms, the data that’s used has to be bias-free, and the engineers creating these algorithms need to make sure they’re not leaking any of their own biases. With that said, here are a few tips to minimize bias:
The data that one uses needs to represent “what should be” and not “what is”. What I mean by this is that it’s natural that randomly sampled data will have biases, because we live in a biased world where equal opportunity is still a fantasy. However, we have to proactively ensure that the data we use represents everyone equally and in a way that does not cause discrimination against a particular group of people. For example, with Amazon’s hiring algorithm, had there been an equal amount of data for men and women, the algorithm may not have discriminated as much.
Some sort of data governance should be mandated and enforced. As both individuals and companies have some sort of social responsibility, we have an obligation to regulate our modeling processes to ensure that we are ethical in our practices. This can mean several things, like hiring an internal compliance team to mandate some sort of audit for every algorithm created, the same way Obermeyer’s group did.
Model evaluation should include an evaluation by social groups. Learning from the instances above, we should strive to ensure that metrics like the true accuracy and false positive rate are consistent when comparing different social groups, whether that be gender, ethnicity, or age.
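As a sketch of that last tip, here is how one might compare false positive rates across groups in Python. The records and group names below are made up for illustration; a real audit (like the COMPAS analysis mentioned above) would use the model's actual predictions and observed outcomes.

```python
# Hypothetical audit data: (group, predicted_positive, actual_positive).
records = [
    ("group_a", True, False), ("group_a", True, True),
    ("group_a", False, False), ("group_a", True, False),
    ("group_b", False, False), ("group_b", True, True),
    ("group_b", False, False), ("group_b", False, True),
]

def false_positive_rate(rows):
    """FPR = false positives / actual negatives."""
    negatives = [r for r in rows if not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives) if negatives else 0.0

# Compute the metric separately for each group, not just overall.
by_group = {
    g: false_positive_rate([r for r in records if r[0] == g])
    for g in ("group_a", "group_b")
}

print(by_group)  # a large gap between groups is a red flag
```

The same per-group breakdown applies to any metric (accuracy, false negative rate, etc.): an algorithm can look fine in aggregate while performing very differently for different groups.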
Text
A new era of empowerment: The rise of women entrepreneurs in Indian society.
Over the last few decades, India has witnessed a significant change in its socio-economic landscape. Among the most inspiring elements of this transformation is the meteoric rise of women entrepreneurs, who are breaking through traditional barriers and redefining the contours of Indian business and innovation. From technology and finance to arts and crafts, women are shaping industries with their vision and leadership.
Traditionally, Indian society has been patriarchal. Women were rarely allowed beyond domestic roles or certain professions like teaching or nursing. However, economic liberalization in the 1990s and the resultant policy reforms opened fresh avenues for women to join the workforce, and newer entrepreneurship opportunities became available. Seeds sown then have germinated and are blooming today as a vigorous ecosystem of women-led enterprises.
The following factors have resulted in the emergence of women entrepreneurs in India:
1. Access to Education: More women are becoming literate and accessing higher education, which is equipping them with the knowledge and skills to run their own enterprises.
2. Government Initiatives
Initiatives such as Beti Bachao Beti Padhao, Stand-Up India, and Mudra Yojana provide financial aid and support to women entrepreneurs and encourage them. Bodies such as SEWA (Self Employed Women's Association) have been instrumental as well.
3. Digital Transformation
The internet and social media have transformed the way businesses function. Through platforms like Instagram, YouTube, and Amazon, women can sell their products and services across the globe.
4. Shift in Social Norms
With shifting societal attitudes, families today are becoming supportive of women's entrepreneurial ambitions. Women are no longer perceived as 'secondary earners' but as equal earners for their households.
5. Inspiring Role Models
Empowering women entrepreneurs such as Nykaa's Falguni Nayar, Biocon's Kiran Mazumdar-Shaw, and Zivame's Richa Kar have inspired thousands of women to take the entrepreneurial path.
Challenges Women Entrepreneurs Face
- Gender Bias: Despite how far society has come, many women are still doubted, and in industries mostly dominated by men they are often not taken seriously.
- Access to Funds: Women find it more difficult to get venture capital or loans because of a deep-rooted bias against women in the financial ecosystem.
- Work-Life Expectations: Few women are able to reconcile the demands of a business with societal expectations that they take care of the family.
Effect on Society
The rise of women entrepreneurs has deep implications for Indian society:
- Growth in Economy: Women-led enterprises are important contributors to GDP, provide jobs, and drive innovation.
- Social Empowerment: Entrepreneurship enables women to become financially independent and more confident, overturning myths and inspiring others.
- Community Development: Many of these female entrepreneurs focus on sustainable and socially responsible ventures, contributing to their local communities and the environment.
Women entrepreneurs in India are no longer just an economic phenomenon, but a social revolution. As women break the glass ceiling, they also tend to break age-old narratives and make India more inclusive and equitable. It is crucially important for the nation that they are nurtured and, above all, celebrated as one step toward making the future brighter and more prosperous.
Text
Breaking the Glass Ceiling: Gender Discrimination in Fertility and the Labor Market in China
Image by Olha Khorimarko
Gender inequality remains a persistent problem in China, deeply rooted in cultural traditions and social structures, leading to discrimination against women. Traditional beliefs favor boys over girls.
According to the National Bureau of Statistics, China's sex ratio reached 111.3:100 in 2022, much higher than the international normal range of 105:100. Surveys show that about 20% of families keep having children until they have a boy because of gender preference.
The result of this bias is that a large number of girls are not born or are neglected, which not only affects the structure of the family, but also exacerbates the problem of gender inequality.
Therefore, we should take the issue of gender discrimination more seriously and make our governments fairer and better reflect the interests of the people they serve. Let's delve deeper and understand the gender inequality and gender discrimination in China.
Gender inequality issues in Chinese fertility
Here are some experts' perspectives on gender inequality issues in Chinese fertility:
• Dr. Yun Zhou (2023) argues that without a genuine commitment to gender equity and reproductive rights, China’s birth-incentivizing measures are unlikely to result in any sustained fertility recovery. Instead, these pro-natalist efforts will reinforce existing patterns of gender inequality.
• According to Lu Pin (2024) in traditional Confucian culture, women were expected to bear "heirs" for the family. In contemporary China, under Communist Party rule, it is seen as a woman's duty to contribute to a "high quality" next generation, aligning with the state's fertility control policies.
These are just a few perspectives. Many other professionals have also expressed concerns about gender inequality issues in Chinese fertility.
Solutions to change gender inequality in fertility in China
• Policy Reforms: Implement policies that promote gender equality in family planning and fertility treatments. This includes equal access to fertility services and support for both men and women.
• Legislation: Enforce laws that prohibit discrimination based on gender in reproductive health services and employment. Strengthen legal frameworks to protect women’s and men’s rights equally.
• Healthcare Access: Improve access to comprehensive reproductive health care for both genders, ensuring equitable treatment and support.
Gender inequality issues in Chinese labor market
The following are the most prominent gender inequality issues in the Chinese labor market:
• Wage Gap: “Women in China earn, on average, 30% less than their male counterparts. Despite having similar qualifications and experience, female employees face significant pay disparities in nearly every industry” (China National Bureau of Statistics, 2023).
• Employment Opportunities: “Women often encounter barriers when applying for jobs, especially in male-dominated fields like technology and engineering” (International Labour Organization, 2023).
• Pregnancy Discrimination: “Pregnant women or those with young children face discrimination in hiring and promotions. Employers may hesitate to hire or promote women they perceive as having potential family-related absences” (Human Rights Watch, 2023).
Gender inequality affects Chinese women in a variety of ways, including but not limited to wage gaps, employment opportunities, pregnancy discrimination, workplace harassment, career advancement, and unpaid labor.
Image by Tommy
Solutions to change gender inequality in labor market in China
• Support for Female Entrepreneurs: Offer financial and advisory support to women starting their own businesses. Encourage entrepreneurial initiatives and provide resources to help women succeed in the business world.
• Public Awareness Campaigns: Conduct campaigns to challenge stereotypes and biases about gender roles in the workplace. Promote positive role models and success stories of women in various professions.
Image by Mariya Brussevich and Ms. Era Dabla-Norris
We must shatter the glass ceiling and confront both gender inequality and gender discrimination in China.
It's time to change the problem of gender inequality and gender discrimination in China!
Sources:
1. Mariya, B. (2021). China's Rebalancing and Gender Inequality. International Monetary Fund.
2. Pin, L. (2024, February 13). How gender inequality is fueling China's fertility crisis. Trtworld.com; TRT WORLD.
https://www.trtworld.com/opinion/how-gender-inequality-is-fueling-chinas-fertility-crisis-16960869
3. Statista. (2017). China: population by gender 2017 | Statista. Statista; Statista.
https://www.statista.com/statistics/251129/population-in-china-by-gender/
4. Yun, Z. (2023, March 8). Gender Inequality: A Key to Understand China's Population Decline - Australian Institute of International Affairs. Australian Institute of International Affairs.
https://www.internationalaffairs.org.au/australianoutlook/gender-inequality-a-key-to-understand-chinas-population-decline
5. China National Bureau of Statistics. (2023). Report on gender pay gap in China. http://www.stats.gov.cn/
6. International Labour Organization. (2023). *Women in male-dominated fields: Barriers and opportunities*. https://www.ilo.org/
7. Human Rights Watch. (2023). *Discrimination against pregnant women in the workplace*. Retrieved from https://www.hrw.org/